FIGURE 6.2
The variation of BiRe-ID's final mAP on Market-1501: an ablation study on λ and μ. A ResNet-18 backbone is employed.
baseline network, as shown in the second section of Table 6.1. By adding both KR-GAL and
FR-GAL, our BiRe-ID achieves 10.0% higher mAP and 9.8% higher Rank@1 accuracy than
the baseline, approaching the accuracy of the corresponding real-valued network.
6.3 POEM: 1-Bit Point-Wise Operations Based on E-M for Point Cloud Processing
In this section, we first implement a baseline 1-bit point cloud network based on XNOR-Net [199],
whose performance drop can be traced to two drawbacks. First, the layer-wise weights of
XNOR-Net roughly follow a Gaussian distribution centered around 0. Such a distribution is
easily disturbed by the noise contained in raw point cloud data [86]: a weight lying near 0
can change its sign under a small perturbation, so the binarization result changes dramatically.
This explains why the baseline network is ineffective in processing point cloud data and
converges poorly, as shown in Fig. 6.3 (a). In contrast, a bimodal weight distribution is
robust against this noise. Second, XNOR-Net computes its scale factor with a non-learning
method and thus fails to adapt to the characteristics of point cloud data.
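To make the first drawback concrete, the NumPy sketch below (with illustrative, assumed scales for the weights and the noise) compares how often a small perturbation flips the sign, i.e., the 1-bit value, of weights drawn from a Gaussian centered at 0 versus a bimodal distribution, and shows XNOR-Net's non-learned scale factor α = mean(|w|):

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer-wise weights roughly Gaussian around 0 (as in the XNOR-Net baseline).
w_gauss = rng.normal(loc=0.0, scale=0.05, size=10000)
# A bimodal alternative: two modes pushed away from zero (scales are assumed).
w_bimodal = np.concatenate([rng.normal(-0.2, 0.05, 5000),
                            rng.normal(+0.2, 0.05, 5000)])

# Small perturbation standing in for noise propagated from raw point clouds.
noise = rng.normal(0.0, 0.02, size=10000)

def flip_rate(w, n):
    """Fraction of weights whose sign (i.e., 1-bit value) changes under noise."""
    return np.mean(np.sign(w) != np.sign(w + n))

print(f"Gaussian around 0 : {flip_rate(w_gauss, noise):.1%} signs flipped")
print(f"Bimodal           : {flip_rate(w_bimodal, noise):.1%} signs flipped")

# XNOR-Net-style scale factor: alpha = mean(|w|), computed analytically from
# the weights rather than learned from the point cloud task.
alpha = np.abs(w_gauss).mean()
w_binarized = alpha * np.sign(w_gauss)
```

Weights near zero flip far more often than weights concentrated in two modes away from zero, which is why the Gaussian-centered baseline is fragile while a bimodal distribution is not.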
To address these issues, we introduce 1-bit point-wise operations based on Expectation-
Maximization (POEM) [261] to efficiently process the point cloud data. We exploit the
TABLE 6.1
The effects of different components in BiRe-ID on the Rank@1 and mAP on the Market-1501 dataset.

ResNet-18                                               Rank@1 (%)   mAP (%)
XNOR-Net                                                   63.8        40.1
Proposed baseline network                                  74.9        54.0
Proposed baseline network + KR-GAL                         80.0        61.1
Proposed baseline network + FR-GAL                         78.5        58.1
Proposed baseline network + KR-GAL + FR-GAL (BiRe-ID)      84.1        64.0
Real-valued Counterpart                                    85.1        64.3
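As a rough illustration of what a 1-bit point-wise operation looks like, the PyTorch sketch below implements a binarized 1×1 (per-point) convolution with a learnable channel-wise scale factor, which addresses the second drawback above. The class name BinaryPointwiseConv, the straight-through gradient estimator, and the tensor shapes are illustrative assumptions; POEM's EM-based weight supervision is not reproduced here.

```python
import torch
import torch.nn as nn


class BinaryPointwiseConv(nn.Module):
    """Hypothetical 1-bit point-wise (1x1) convolution with a learnable
    channel-wise scale factor, in the spirit of learnable scaling.
    POEM's EM-based weight supervision is NOT reproduced here."""

    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_channels, in_channels) * 0.01)
        # Learnable scale factor (one per output channel), unlike XNOR-Net's
        # fixed alpha = mean(|w|).
        self.alpha = nn.Parameter(torch.ones(out_channels, 1))

    def forward(self, x):
        # x: (batch, in_channels, num_points)
        w_bin = torch.sign(self.weight)            # 1-bit weights in {-1, 0, +1}
        # Straight-through estimator: forward uses sign(w), backward passes
        # gradients through the real-valued latent weights.
        w = (w_bin - self.weight).detach() + self.weight
        w = self.alpha * w                         # learnable re-scaling
        # Point-wise (1x1) convolution over points == per-point linear map.
        return torch.einsum('oi,bip->bop', w, x)


# Usage: binarized per-point feature transform on a toy point cloud.
if __name__ == "__main__":
    pts = torch.randn(2, 64, 1024)                 # (batch, channels, points)
    layer = BinaryPointwiseConv(64, 128)
    out = layer(pts)
    print(out.shape)                               # torch.Size([2, 128, 1024])
```

Making the scale factor a trainable parameter, instead of fixing it analytically from the weights as XNOR-Net does, is one way to let the layer adapt to the characteristics of point cloud data.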